SERVICE PAIRING CENTER
PATENT ABSTRACT:
This disclosure describes techniques for providing initial acknowledgments to a source device that performs a data write operation within a data center or across geographically distributed data centers. In one example, this disclosure describes a method that includes receiving, from a source device within a local data center, data to be stored on a target device that is located within a remote data center; storing the data in high-speed memory included in a portal device; after storing the data, and before the data is stored on the target device, issuing a local acknowledgment to the source device; transmitting the data over a connection to the remote data center; receiving, from a device within the remote data center, a remote acknowledgment; and, in response to receiving the remote acknowledgment, removing the data from the high-speed memory included in the portal device.
Publication number: BR112019026003A2
Application number: R112019026003-0
Filing date: 2018-06-13
Publication date: 2020-06-23
Inventors: Anthony Madden Stephen; Stephen Anthony Madden
Applicant: Equinix, Inc.
IPC main class:
PATENT DESCRIPTION:
[0001] This application claims the priority benefit of United States Provisional Application Serial Number 62/518,992, filed June 13, 2017, the contents of which are incorporated herein in their entirety. TECHNICAL FIELD [0002] The disclosure relates to computer networks and, more specifically, to a service peering exchange for creating and managing service-to-service paths between applications provided by computer networks. BACKGROUND [0003] As digital services become more dominant, such services interact in an automated way to provide connectivity between companies. Productized application programming interfaces (APIs) for accessing enterprise services are becoming the new digital storefront, and companies typically employ API portals accessible via the public Internet to provide a single, controlled, and reliable entry point to their internal system architectures. SUMMARY [0004] In general, this disclosure describes a service peering exchange for creating and managing service-to-service paths between applications. For example, a service peering exchange with network connectivity to multiple networks can receive application programming interface data. [0005] Based on the policies, the service peering exchange can logically segment the shared services bandwidth provided through the service peering exchange and route service requests to the appropriate service endpoints. The service peering exchange may also, for some service requests, verify that the service requester is authorized and enforce service level guidelines before routing service requests to service endpoints, in accordance with policies configured for each of the target service endpoints. [0006] Each instance of a service that exchanges service traffic with the service peering exchange exposes a remote API at a transport-layer network address and port that can be advertised and made available using a service directory, at the transport layer and higher layers of the protocol stack.
One or more of the services may, at least in some cases, be accessible through a service portal (or "API portal") of the service provider that operates as a public interface to the remote APIs at a network address and port of the service portal. The service peering exchange may include or access a service registry to obtain respective network addresses and ports for accessing services at the various service endpoints of multiple service provider networks. The service peering exchange publishes registered APIs accessible to service peering exchange customers and provides network-layer and higher-layer connectivity from clients to the APIs at service peering exchange network addresses and ports accessible to customers over an access link. [0007] In some examples, to route incoming service requests on the service peering exchange's network addresses and ports to the appropriate service endpoints, the service peering exchange performs service-level mapping and forwards the service sessions between (i) the requesting application and the service peering exchange and (ii) the service peering exchange and the service endpoints. In some examples, the service peering exchange may have connectivity to a cloud exchange or other network service exchange that allows one-to-many connectivity between the service peering exchange and the networks hosting the services. In some examples, to route service requests, the service peering exchange applies policies to allow bridging (i.e., layer 2 forwarding) of service requests to service endpoints that are visible at the network layers (layer 2/layer 3) to requesting applications. In some examples, the service peering exchange may orchestrate service chains for the services by routing service requests according to one or more policies. [0008] The service peering exchange techniques described here may have one or more technical advantages.
For example, upon establishing service sessions or bridging communications, the service peering exchange can allow multiple applications to exchange service requests and responses without requiring any direct, dedicated network-layer connectivity between the networks running the service instances that communicate with each other. In this way, the service peering exchange can replace interconnection networks so that customer networks can remain unconnected with each other except through the service peering exchange and for service traffic only. This can avoid the need for customers to purchase or otherwise establish direct or virtual connectivity between customers using cross-connections or virtual connections such as virtual private networks or virtual circuits from a cloud exchange, Internet exchange, or Ethernet exchange. Reducing or eliminating direct or virtual connectivity between clients can facilitate lower-latency service traffic and can simplify configuration of, and reduce load on, networks by reducing the resources typically required to facilitate such network connectivity, such as network links, firewalls, memory resources of network devices, and so on. The service peering exchange can also provide a centralized location for multiple service endpoints to perform endpoint-specific (or at least customer-specific) requester verification, packet and API security inspection, and data collection and analysis.
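To make the service-registry idea above concrete, here is a minimal sketch in Python. The disclosure does not specify any implementation; the class, API names, and addresses below are invented for illustration only: a portal registers an API reachable at a (network address, port) service endpoint, and the exchange later resolves the API name to that endpoint.

```python
class ServiceRegistry:
    """Maps registered API names to (network address, port) service endpoints."""

    def __init__(self):
        self._endpoints = {}

    def register(self, api_name, address, port):
        # A service portal registers an API reachable at a service endpoint.
        self._endpoints[api_name] = (address, port)

    def resolve(self, api_name):
        # The exchange resolves a registered API to its service endpoint,
        # or None when the API is unknown.
        return self._endpoints.get(api_name)


registry = ServiceRegistry()
registry.register("billing-api", "203.0.113.10", 8443)
endpoint = registry.resolve("billing-api")  # ("203.0.113.10", 8443)
```

A real registry would also carry API description data (protocols, methods, access policies); this sketch shows only the address/port lookup the routing step depends on.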
[0009] In one example, a method includes receiving, via a service peering exchange executed by one or more computing devices and at a first service exchange endpoint of the service peering exchange, a first incoming service request from a first customer network, wherein the first incoming service request is destined for the first service exchange endpoint, and wherein the first incoming service request may invoke an application programming interface of a first application; issuing, via the service peering exchange in response to receipt of the first incoming service request, a first outgoing service request destined for a service endpoint of a second customer network executing the first application, wherein the first outgoing service request may invoke the application programming interface of the first application; receiving, via the service peering exchange and at a second service exchange endpoint of the service peering exchange that is different from the first service exchange endpoint, a second incoming service request from the first customer network, wherein the second incoming service request is destined for the second service exchange endpoint, and wherein the second incoming service request may invoke an application programming interface of a second application; and issuing, via the service peering exchange in response to receipt of the second incoming service request, a second outgoing service request destined for a service endpoint of a third customer network executing the second application, wherein the second outgoing service request may invoke the application programming interface of the second application.
[0010] In another example, a service exchange system comprises one or more service peering exchanges configured to execute on a service peering exchange platform comprising one or more computing devices; and a service portal for an application configured to execute on a first customer network, the service portal configured to execute on a computing device of the first customer network, wherein the one or more service peering exchanges are configured to receive, at a service exchange endpoint, an incoming service request from a second customer network, wherein the incoming service request is destined for the service exchange endpoint, and wherein the incoming service request may invoke an application programming interface of the application configured to execute on the first customer network, wherein the one or more service peering exchanges are configured to issue, in response to receipt of the incoming service request, an outgoing service request destined for a service endpoint of the service portal, wherein the outgoing service request may invoke the application programming interface of the application configured to execute on the first customer network, and wherein the service portal is configured to receive the outgoing service request at the service endpoint and direct the outgoing service request to the application.
[0011] In another example, a service exchange system comprises one or more service peering exchanges configured to execute on a service peering exchange platform comprising one or more computing devices, wherein the one or more service peering exchanges are configured to receive, at a first service exchange endpoint, a first incoming service request from a first customer network, wherein the first incoming service request is destined for the first service exchange endpoint, and wherein the first incoming service request may invoke an application programming interface of a first application, wherein the one or more service peering exchanges are configured to issue, in response to receipt of the first incoming service request, a first outgoing service request destined for a service endpoint of a second customer network running the first application, wherein the first outgoing service request may invoke the application programming interface of the first application, wherein the one or more service peering exchanges are configured to receive, at a second service exchange endpoint that is different from the first service exchange endpoint, a second incoming service request from the first customer network, wherein the second incoming service request is destined for the second service exchange endpoint, and wherein the second incoming service request may invoke an application programming interface of a second application, and wherein the one or more service peering exchanges are configured to issue, in response to receipt of the second incoming service request, a second outgoing service request destined for a service endpoint of a third customer network running the second application, wherein the second outgoing service request may invoke the application programming interface of the second application. [0012] Details of one or more examples are set out in the attached drawings and in the description below.
Other features, objects, and advantages will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF THE DRAWINGS [0013] Figure 1 is a block diagram illustrating an exemplary service exchange system for creating and managing service-to-service paths between applications accessible at multiple different service endpoints, in accordance with techniques of this disclosure. [0014] Figures 2A-2B are block diagrams illustrating an example of a cloud exchange point that is configurable through a programmable networking platform to establish network connectivity between a service peering exchange and multiple customer networks to enable service-to-service communication. [0015] Figure 3 is a block diagram illustrating an exemplary service exchange system, in accordance with the techniques of this disclosure. [0016] Figure 4 is an exemplary service map, according to the techniques of this disclosure. [0017] Figure 5 is a block diagram illustrating a conceptual view of a service exchange system having a metropolitan cloud exchange that provides multiple cloud exchange points for communication with a service peering exchange, in accordance with the techniques described here. [0018] Figure 6 is a flowchart illustrating an exemplary mode of operation for a service peering exchange, in accordance with the techniques of this disclosure. [0019] Figure 7 is a block diagram illustrating an example of a distributed service exchange system, according to the techniques described here. [0020] Similar reference characters denote similar elements throughout the figures and text. DETAILED DESCRIPTION [0021] Figure 1 is a block diagram illustrating an exemplary service exchange system for creating and managing service-to-service paths between applications accessible at multiple different service endpoints, in accordance with techniques of this disclosure.
Service exchange system 100 includes multiple customer networks 108A through 108C (collectively, "customer networks 108") for respective customers of a provider of service peering exchange 101. [0022] Each of the customer networks 108 may represent one of an enterprise network; a cloud service provider network; a private, public, or hybrid cloud network; or a network of tenants within a cloud network, for example. Each of the customer networks 108 is a layer 3 network and may include one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection and/or intrusion prevention devices, servers, computer terminals, laptops, printers, databases, wireless mobile devices such as cell phones or personal digital assistants, wireless access points, bridges, cable modems, application accelerators, or other network devices. [0023] Each of the customer networks 108 includes one or more hosting servers (not shown) that individually run an instance of at least one of the applications 110A-110D (collectively, "applications 110"). For example, one or more hosting servers of customer network 108A run an instance of service application 110A, and the service instance processes incoming service requests at the network address and port of the hosting server assigned to the service instance. As another example, one or more hosting servers of customer network 108C individually run an instance of service application 110C and/or an instance of service application 110D. [0024] Hosting servers may include compute servers, storage servers, application servers, or other computing devices that run applications that process requests for services received over a network. Hosting servers can represent real servers or virtual servers, such as virtual machines, containers, or other virtualized execution environments.
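The hosting-server behavior described above, in which each service instance is bound to a (network address, port) pair and receives the requests addressed to that pair, can be sketched as follows. This is a purely illustrative model under invented addresses; the disclosure does not prescribe any particular dispatch mechanism.

```python
# Hypothetical service endpoints: service instances bound to
# (address, port) pairs on hosting servers of customer networks 108.
SERVICE_INSTANCES = {
    ("192.0.2.10", 8080): "instance of application 110A",
    ("192.0.2.20", 8080): "instance of application 110C",
    ("192.0.2.20", 9090): "instance of application 110D",
}

def dispatch(dst_address, dst_port):
    # A hosting server delivers a request to whichever instance is
    # bound to the destination address and port, or to nothing when
    # no instance is bound there.
    return SERVICE_INSTANCES.get((dst_address, dst_port))
```

Two instances on the same server (here 110C and 110D) are distinguished by port alone, which is why the disclosure consistently treats a service endpoint as an address *and* port pair rather than an address.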
[0025] Applications 110 offer services such as data storage, eCommerce, billing, marketing, customer relationship management (CRM), social media, digital media, financial, time, search, and other services accessible using machine-to-machine communication via the corresponding customer network 108. Each of the applications 110 may represent a different service. Each service instance hosted by a hosting server exposes a remote application programming interface (API) on a network address and port of the hosting server. The network address and port combination mapped to a service instance run by a hosting server is referred to as a "service endpoint" and, more specifically in this example where service instances are logically situated behind the service portals 112, an "internal service endpoint." For example, an instance of service application 110D processes service requests received on a network address and port of the hosting server running the service instance, service requests that conform to the API of application 110D. Service requests may be referred to as "API requests." [0026] The services offered through the applications 110 may alternatively be referred to as "network services," in which the services communicate with other computing services using application and messaging protocols, and other emerging protocols developed at least in part for the World Wide Web, such as HyperText Transfer Protocol (HTTP) or Simple Mail Transfer Protocol (SMTP), and operate over Internet Protocol networks. Services can operate under different service frameworks, such as Apache Axis, Java Web Services, Windows Communication Foundation (WCF), and .NET Framework, each of which makes use of one or more network service protocols for communicating service data between machines.
Exemplary network service protocols include JavaScript Object Notation (JSON)-Remote Procedure Call (RPC), RESTful (Representational State Transfer) protocols, Simple Object Access Protocol (SOAP), Apache Thrift, eXtensible Markup Language (XML)-RPC, Message Queue Telemetry Transport (MQTT), Rabbit Message Queue (RabbitMQ), Constrained Application Protocol (CoAP), and Web Services Description Language (WSDL). [0027] In this example, administrators of customer networks 108 deploy respective service portals 112 to the customer networks to expose internal APIs of service instances to customers external to the customer networks. [0028] As used herein, the term "routing" and similar terms for the delivery of service requests to the intended destination may include layer 2/layer 3 forwarding, but may, in addition or alternatively, include the application of policies to identify, approve, and issue service requests to their intended destinations in accordance with the service protocols for the service requests. Outgoing service requests can be routed in their original form or modified, for example, to direct service requests from an incoming service exchange endpoint to a service application endpoint, which can be reachable through a service endpoint of a service portal. [0029] In addition to forwarding service requests between requesters (i.e., service request issuers) and internal service endpoints, each of the service portals 112 can also verify that requesters are authorized to make requests, prevent access to internal service endpoints by unauthorized requesters, perform load balancing across multiple service instances of applications 110, throttle service requests, and/or translate service requests from received network service protocols to other network service protocols (e.g., transforming RESTful protocols to SOAP) before routing service requests, for example.
Each of the service portals 112 may use a service discovery mechanism to identify internal service endpoints for a service offered by an application 110, and forward service requests for the service to the internal service endpoints. Exemplary service discovery mechanisms include client-side discovery and server-side discovery. Service portals 112 thus provide external APIs for reaching internal service endpoints of applications 110. [0030] Service discovery can occur at the service layer, as the service layer typically describes and provides business and service capabilities for services offered in compliance with one or more network service protocols. Services offered through applications 110 to corresponding customer networks 108 are associated with service endpoints 114. Service discovery information, obtained by service peering exchange 101 and advertised to service portals 112 for discovery of services by service portals 112, may be stored by service peering exchange 101 in association with the service endpoints. [0031] Customer networks 108 are coupled to service peering exchange 101 via respective communication links 103A-103C (collectively, "communication links 103"). Customer networks 108 and service peering exchange 101 exchange data communications over communication links 103, each of which may represent at least one Ethernet link, Asynchronous Transfer Mode (ATM) link, or SONET/SDH link, for example. Communication links 103 may each represent a layer 2 network, such as a local area network or virtual local area network (VLAN). Data communications may conform to the Open System Interconnect (OSI) model, Transmission Control Protocol (TCP)/Internet Protocol (IP) model, or User Datagram Protocol (UDP)/IP model. Data communications can include a layer 3 (i.e., network layer) packet having an Internet Protocol header that includes source and destination network addresses and source and destination layer 4 (i.e., transport layer, e.g., TCP/UDP) ports.
The Internet Protocol header also specifies the transport-layer protocol for the data communication. [0032] Customer networks 108 may not have network connectivity to each other. That is, a device on customer network 108A may be unable to send a network (layer 3) packet to a device on customer network 108B or to a device on customer network 108C, because there is no physical or virtual network configured to route network packets between customer networks 108. In some cases, customer networks 108 may have network connectivity to each other only over communication links other than communication links 103. [0033] Service peering exchange 101 obtains, for example using service discovery, service termination data describing service endpoints 114 for APIs exposed by service portals 112. Service termination data may include network address and port information for service endpoints 114. Service peering exchange 101 can perform service discovery to obtain service termination data from service registries or service portals, for example by sending service discovery messages to the service portals. API description data can describe the protocols, methods, API endpoints, and so on that define acceptable service requests for service endpoints to interact with services for applications 110. API description data can be formatted with WSDL. [0034] In accordance with the techniques described in this disclosure, service peering exchange 101 enables inter-service communication between applications 110 run by different customer networks 108, creating and managing service-to-service paths between applications. In this example, service peering exchange 101 exposes service exchange endpoints 106 to send and receive service traffic with customer networks 108. As used here, "service traffic" can refer to service requests invoking application programming interfaces of service instances, as well as responses to such service requests (or "service responses").
[0035] Each service exchange endpoint 106 is a network address and port pair that is internally mapped, by service peering exchange 101 using service mapping data, to one of the service endpoints 114 for services provided through applications 110 running on customer networks 108. Service peering exchange 101 receives, at service exchange endpoints 106, service requests issued, for example, by applications 110. Service peering exchange 101 in response issues corresponding service requests addressed to the service endpoints 114, on a different customer network 108, to which the destination service exchange endpoints 106 are mapped. In this way, service peering exchange 101 allows service-to-service communication between applications running on customer networks 108 that do not have a dedicated, network-layer connection directly to each other. Service mapping data can be obtained by service peering exchange 101 in real time by performing service mapping resolution for load balancing service requests between portals/service applications that are members of a group. [0036] In the example of Figure 1, service peering exchange 101 maps service exchange endpoints 106A-106D to respective service endpoints 114A-114D of multiple customer networks 108, the service endpoints 114 in that example being accessible on service portals 112. For example, service peering exchange 101 maps service exchange endpoint 106A, exposed via service peering exchange 101, to service endpoint 114A, exposed through service portal 112A of customer network 108A and usable to access the API of application 110A. Service peering exchange 101 maps service exchange endpoint 106B, exposed through service peering exchange 101, to service endpoint 114B, exposed through service portal 112B of customer network 108B and usable to access the API of application 110B.
Service peering exchange 101 maps service exchange endpoints 106C, 106D, exposed through service peering exchange 101, to service endpoints 114C, 114D, exposed through service portal 112C of customer network 108C and usable to access the APIs of applications 110C, 110D. Consequently, and as described in greater detail below, service peering exchange 101 allows applications 110B-110D running on customer networks 108B, 108C to issue service requests to application 110A, even though customer networks 108 do not have network connectivity to each other. [0037] Service peering exchange 101 receives a service request 124A from customer network 108A. Service request 124A has a destination network address and destination port that match the network address and port of service exchange endpoint 106C. Service request 124A may conform to a network service protocol, such as any of the network service protocols listed above. For example, service request 124A may represent REST communication using HTTP, SOAP communication, or another type of service request that invokes an API of application 110C. That is, service instances of application 110C would recognize service request 124A as a service request that invokes the API of application 110C. Service request 124A may be generated by an instance of service application 110A and issued from a computing device of customer network 108A running the service instance. Service request 124A includes service data to invoke an API offered through an instance of service application 110C. [0038] Service peering exchange 101 maps service request 124A, received at service exchange endpoint 106C, to service endpoint 114C and generates a new outgoing service request 124A'. Outgoing service request 124A' includes the service data from service request 124A and includes a layer 4 header and a layer 3 header that cause the outgoing service request 124A' to be received at service endpoint 114C exposed through service portal 112C.
In other words, service peering exchange 101 rewrites at least the destination network address and destination port of service request 124A, which is destined for service exchange endpoint 106C, to generate and issue outgoing service request 124A', which is destined for service endpoint 114C. Service peering exchange 101 may also generate outgoing service request 124A' to have, as its source service endpoint, service exchange endpoint 106A, which is mapped through service peering exchange 101 to service endpoint 114A. Service peering exchange 101 issues outgoing service request 124A' via communication link 103C. Service portal 112C receives outgoing service request 124A' at service endpoint 114C. Service peering exchange 101 may host a transport-layer session (e.g., TCP) between service peering exchange 101 and an instance of service application 110A and a transport-layer session between service peering exchange 101 and an instance of service application 110C. In this way, service peering exchange 101 creates a service-to-service path for service requests and service responses between a service instance of application 110A and a service instance of application 110C, despite customer networks 108 not having network connectivity to each other. [0039] Service portal 112C sends outgoing service request 124A' for processing by an instance of service application 110C. In some cases, service portal 112C may generate a new outgoing service request 124A' with a layer 4 header and a layer 3 header having a destination port and a destination address of the instance of service application 110C. The instance of service application 110C may issue a new service request 125A to application 110B running on customer network 108B. Service request 125A is destined for service exchange endpoint 106B of service peering exchange 101.
Service peering exchange 101 receives service request 125A at service exchange endpoint 106B. Service peering exchange 101 maps service request 125A, received at service exchange endpoint 106B, to service endpoint 114B and generates a new outgoing service request 125A'. To generate new outgoing service request 125A', service peering exchange 101 can apply operations similar to those described above for generating outgoing service request 124A'. Service peering exchange 101 issues outgoing service request 125A' over communication link 103B. Service portal 112B receives outgoing service request 125A' at service endpoint 114B. Service peering exchange 101 may host a transport-layer session between service peering exchange 101 and an instance of service application 110C and a transport-layer session between service peering exchange 101 and an instance of service application 110B. [0040] Service portal 112B sends outgoing service request 125A' for processing by an instance of service application 110B. In some cases, service portal 112B may generate a new outgoing service request 125A' with a layer 4 header and a layer 3 header having a destination port and a destination address of the instance of service application 110B. The instance of service application 110B processes outgoing service request 125A' and may generate a service response 125B destined for service exchange endpoint 106C, which is indicated as the source service endpoint of outgoing service request 125A'. [0041] Service peering exchange 101 receives service response 125B at service exchange endpoint 106C and generates outgoing service response 125B', based on the mapping from service exchange endpoint 106C to service endpoint 114C. Outgoing service response 125B' is therefore destined for service endpoint 114C. Service peering exchange 101 sends outgoing service response 125B' over communication link 103C to customer network 108C.
Service portal 112C receives outgoing service response 125B' at service endpoint 114C and directs outgoing service response 125B' to the instance of service application 110C. The instance of service application 110C processes outgoing service response 125B'. [0042] The instance of service application 110C may generate a service response 124B responsive to outgoing service request 124A'. Service response 124B is destined for service exchange endpoint 106A based on the source endpoint indicated in outgoing service request 124A'. Service peering exchange 101 receives service response 124B at service exchange endpoint 106A and generates service response 124B', based on the mapping from service exchange endpoint 106A to service endpoint 114A. Service response 124B' is therefore destined for service endpoint 114A. Service peering exchange 101 sends service response 124B' over communication link 103A to customer network 108A. Service portal 112A receives service response 124B' at service endpoint 114A and directs service response 124B' to the instance of service application 110A. The instance of service application 110A processes service response 124B'. [0043] In some examples, each of the service portals 112 registers its APIs and corresponding service endpoints 114 with service peering exchange 101. Service portal 112A may register APIs accessible at service endpoint 114A, service portal 112B may register APIs accessible at service endpoint 114B, and service portal 112C may register APIs accessible at service endpoints 114C and 114D, for example. In some cases, a customer operating each portal 112 may register APIs and service endpoints 114 through a portal application, such as the customer portal 330 described below with respect to Figures 2A-2B.
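The destination-and-source rewriting performed by the exchange in the request flow above can be sketched as follows. This is a hedged illustration: the endpoint map, addresses, and the dictionary request model are invented here, and a real exchange would operate on actual layer 3/layer 4 headers rather than Python dictionaries. The sketch mirrors the mapping of service exchange endpoint 106C to service endpoint 114C, with the source rewritten to the exchange endpoint mapped to the requester so that responses return through the exchange.

```python
# Hypothetical mapping of service exchange endpoints 106 to service
# endpoints 114 (addresses invented for illustration).
ENDPOINT_MAP = {
    ("198.51.100.4", 443): ("203.0.113.1", 8443),   # 106A -> 114A
    ("198.51.100.6", 443): ("203.0.113.3", 8443),   # 106C -> 114C
}

def rewrite_request(request, source_exchange_endpoint):
    """Build the outgoing service request for an incoming request.

    The destination is rewritten from a service exchange endpoint to
    the mapped service endpoint; the source becomes the exchange
    endpoint mapped to the requester's own service endpoint.
    """
    outgoing = dict(request)  # the service data (payload) is preserved
    outgoing["dst_addr"], outgoing["dst_port"] = ENDPOINT_MAP[
        (request["dst_addr"], request["dst_port"])]
    outgoing["src_addr"], outgoing["src_port"] = source_exchange_endpoint
    return outgoing

incoming = {"src_addr": "10.0.0.5", "src_port": 50000,
            "dst_addr": "198.51.100.6", "dst_port": 443,
            "payload": "API call for application 110C"}
outgoing = rewrite_request(incoming, ("198.51.100.4", 443))
```

Because the responder only ever sees the exchange endpoint as the request's source, the responding customer network needs no route to the requesting customer network, which is the isolation property the flow above relies on.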
Service endpoints and service exchange endpoints may be indicated in part using Uniform Resource Locators (URLs) or Uniform Resource Identifiers (URIs), in part using transport-layer ports, or by explicitly specifying a transport-layer network address and port for the service endpoint, for example. [0044] [0044] Applications 110 may perform service discovery to identify the service exchange endpoints 106 at which to access service endpoints 114 through service peering center 101. Service discovery may occur at the service layer. Service peering center 101 may expose a discovery API, for example using a discovery Uniform Resource Locator (URL), in order to enable such service discovery by applications 110. For example, application 110A may invoke the discovery API using a discovery request message that includes a parameter value indicating application 110C. Service peering center 101 provides a discovery response message that includes the network address and port for service exchange endpoint 106C, which is mapped by service peering center 101 to service endpoint 114C. Accordingly, application 110A may direct service requests to service exchange endpoint 106C for delivery through service peering center 101 to service endpoint 114C exposed by service gateway 112C, using the techniques described above. [0045] [0045] The service peering techniques allow service peering center 101 to receive and forward service requests to the appropriate service endpoints 114 for respective applications 110 executing on the networks of multiple different customers 108, even though such networks may have no dedicated network connectivity to one another, at least in some cases. In this way, service peering center 101 can take the place of an interconnection network, such that customer networks 108 can remain unconnected to one another except through service peering center 101, and then only for service traffic, thus avoiding the need for the customers that deploy customer networks 108 to purchase or otherwise establish direct or virtual connectivity among customer networks 108 using cross-connects or virtual connections such as virtual private networks. In effect, service peering center 101 substantially abstracts the networks from one another by providing service request routing among customer networks 108 and segmenting services among service gateways 112 according to the access authorizations among applications 110. [0046] [0046] Furthermore, service peering center 101 can provide a neutral service for customers to peer API services with one another. As business processes become more fluid and intertwined in business ecosystems, customers can bundle and share services in flows. The flow of service requests 124 and 125 from customer network 108A to customer network 108B through customer network 108C is an example of such a flow. Each service may belong to a different organizational entity (and the digital service component the entity provides), and the flow may represent a new joint business offering. Service peering center 101 provides a layered service as the point of intersection for business-to-business digital transactions between two or more customers that have deployed respective customer networks 108. As described in more detail below, the service peering center can be deployed at an interconnection facility, such as a cloud exchange or an Internet exchange, and become an open digital business hub for tenants that have access to the service peering center or that otherwise have applications executing on networks that have access to the service peering center. In some cases, one or more customer tenants are colocated directly with the service peering center by deploying network and computing equipment for customer networks 108 within a physical data center that houses the service peering center.
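The service-layer discovery described in paragraph [0044] above might be modeled by the following minimal Python sketch. The directory contents, message fields, and endpoint values are illustrative assumptions, not an API defined by this disclosure.

```python
import json

# Hypothetical directory: target application name -> service exchange
# endpoint mapped by the peering center to that application's endpoint.
DIRECTORY = {"application-110C": {"address": "203.0.113.10", "port": 9002}}

def handle_discovery_request(body: str) -> str:
    """Return, as a discovery response message, the service exchange
    endpoint at which the requested target application can be reached."""
    target = json.loads(body)["target_application"]
    endpoint = DIRECTORY[target]
    return json.dumps({"service_exchange_endpoint": endpoint})

# An application (in the role of 110A) asks where to reach application 110C.
response = handle_discovery_request(
    json.dumps({"target_application": "application-110C"}))
```

The requester then directs service requests to the returned address and port; the peering center, not the requester, holds the mapping to the actual service endpoint.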
One or more customer tenants may also or alternatively be connected indirectly to the service peering center through a network service provider colocated within the physical data center and connected to the service peering center. [0047] [0047] Figures 2A-2B are block diagrams illustrating an example of a cloud exchange point that is configurable through a programmable network platform to establish network connectivity between a service peering center and multiple customer networks, to allow service-to-service communication between applications executing on the customer networks, in accordance with the techniques of this disclosure. Cloud exchange point 303 is an exemplary implementation of a software-defined network (SDN) or software-defined wide-area network (SD-WAN) switching device in which a controller (in this example, programmable network platform 328) manages network configuration for the network to facilitate connectivity between customer networks 308 and service peering center 301. Cloud exchange point 303, customer networks 308, and service peering center 301 may represent an exemplary instance of service exchange system 100. Customer networks 308 may represent exemplary customer networks 108, applications 310 may represent exemplary applications 110, service gateways 312 may represent exemplary service gateways 112, service endpoints 314 may represent exemplary service endpoints 114, and service peering center 301 may represent an exemplary service peering center 101. [0048] [0048] Customer networks 308A-308C (collectively, “customer networks 308”), each associated with a different customer of the provider of cloud exchange point 303, access cloud exchange point 303 within a data center 300 in order to receive aggregated services from one or more other networks coupled to cloud exchange point 303. Customer networks 308 each include termination devices that provide and/or consume services. 
Exemplary termination devices include real or virtual servers. [0049] [0049] Customer networks 308A-308B include respective provider edge/autonomous system border routers (PE/ASBRs) 309A-309B. Each of PE/ASBRs 309A, 309B may execute exterior gateway routing protocols to peer with one of PE routers 302A-302B (“PE routers 302” or, more simply, “PEs 302”) over one of access links 316A-316B (collectively, “access links 316”). In the illustrated examples, each of access links 316 represents a transit link between an edge router of a customer network 308 and an edge router (or autonomous system border router) of cloud exchange point 303. For example, PE 309A and PE 302A may directly peer via an exterior gateway protocol, e.g., exterior BGP, to exchange L3 routes over access link 316A and to exchange L3 data traffic between customer network 308A and cloud service provider networks 320. Access links 316 may, in some cases, represent, and may alternatively be referred to as, attachment circuits for IP-VPNs configured on IP/MPLS fabric 318, as described in greater detail below. Access links 316 may, in some cases, include a direct physical connection between at least one port of a customer network 308 and at least one port of cloud exchange point 303, with no intervening transit network. Access links 316 may operate over a VLAN or a stacked VLAN (e.g., QinQ), a VxLAN, an LSP, a GRE tunnel, or another type of tunnel. [0050] [0050] While illustrated and primarily described with respect to L3 connectivity between customer networks 308 and service peering center 301, PE routers 302 may additionally or alternatively provide, via access links 316, L2 connectivity between customer networks 308 and service peering center 301. For example, a port of PE router 302A may be configured with an L2 interface that provides, to customer network 308A, L2 connectivity to service peering center 301 over access link 316A, with service peering center 301 coupled (directly or through another network device) to a port of PE router 304A that is also configured with an L2 interface. The port of PE router 302A may additionally be configured with an L3 interface that provides, to customer network 308A, L3 connectivity to cloud service provider 320B via access links 316A. PE 302A may be configured with multiple L2 and/or L3 sub-interfaces such that customer 308A may be provided, by the cloud exchange provider, with one-to-many connectivity to service peering center 301 and to one or more other networks attached to cloud exchange point 303. [0051] [0051] To create an L2 interconnection between a customer network 308 and service peering center 301, in some examples, IP/MPLS fabric 318 is configured with an L2 bridge domain (e.g., an L2 virtual private network (L2VPN) such as a virtual private LAN service (VPLS), E-LINE, or E-LAN service) to bridge L2 traffic between a customer-facing port of PEs 302 and a port of PE 304A facing the service peering center. In some cases, service peering center 301 and one or more customer networks 308 may have access links to the same PE router 302, 304, which bridges the L2 traffic using the bridge domain. To create an L3 interconnection between a customer network 308 and service peering center 301, in some examples, IP/MPLS fabric 318 is configured with L3 virtual routing and forwarding instances (VRFs). [0052] [0052] In some examples of cloud exchange point 303, any of access links 316 and aggregation links 322 may represent network-to-network interface (NNI) links. Additional details of NNI links and the provisioning of NNI links to facilitate layer 2 connectivity within a data center 300 are found in a United States patent, “provisioning for a carrier Ethernet exchange”, which is hereby incorporated by reference in its entirety. 
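As a non-limiting sketch, the two interconnection choices of paragraph [0051], an L2 bridge domain versus L3 VRFs, can be modeled as follows in Python. The function name, port identifiers, and returned record shapes are hypothetical, introduced only to make the alternatives concrete.

```python
def provision_interconnect(kind, customer_port, peering_port):
    """Model the choice in paragraph [0051]: an L2 request yields a bridge
    domain bridging two ports; an L3 request yields VRFs on the two PEs,
    in the hub-and-spoke roles described later in this disclosure."""
    if kind == "L2":
        return {"type": "bridge-domain",   # e.g., VPLS, E-LINE, or E-LAN
                "ports": [customer_port, peering_port]}
    if kind == "L3":
        return {"type": "vrf",
                "vrfs": [{"pe": customer_port, "role": "spoke"},
                         {"pe": peering_port, "role": "hub"}]}
    raise ValueError(f"unknown interconnect kind: {kind}")

l2 = provision_interconnect("L2", "302A/port1", "304A/port1")
l3 = provision_interconnect("L3", "302A", "304A")
```

Either result connects a customer-facing port of PEs 302 with the port of PE 304A facing the service peering center; only the layer at which traffic is kept separate differs.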
[0053] [0053] In this example, customer network 308C is not an autonomous system having an autonomous system number. Customer network 308C may represent an enterprise, a network service provider, or another customer network that is within the routing footprint of the cloud exchange point. Customer network 308C includes a customer edge (CE) device 311 that may execute exterior gateway routing protocols to peer with PE router 302B over access link 316C. In various examples, any of PEs 309A-309B may alternatively be or otherwise represent CE devices. Customer networks 308A-308B may or may not be autonomous systems having autonomous system numbers. [0054] [0054] Access links 316 include physical links and may include one or more intermediate switching devices. CE device 311 and PE routers 302A-302B exchange L2/L3 packets over access links 316. In this respect, access links 316 constitute transport links for cloud access via cloud exchange point 303. [0055] [0055] Cloud exchange point 303, in some examples, aggregates the access of customers 308 to cloud exchange point 303 and thereby to other networks coupled to cloud exchange point 303. Figures 2A-2B, for example, illustrate access links 316A-316B connecting respective customer networks 308A-308B to PE router 302A of cloud exchange point 303 and access link 316C connecting customer network 308C to PE router 302B. Any one or more of PE routers 302, 304 may comprise ASBRs. PE routers 302, 304 and IP/MPLS fabric 318 may be configured according to the techniques described herein to interconnect any of access links 316 to access link 322. As a result, cloud service provider network 320A, for example, need only have configured a single aggregated cloud link (here, access link 322A) in order to provide services to multiple customer networks 308. That is, the operator of the service peering center does not need to provision and configure separate service links from service peering center 301 to PE routers 309, 311, for example, in order to provide services to each of customer networks 308. Cloud exchange point 303 may instead interconnect access link 322, coupled to PE 304A and service peering center 301, with multiple cloud access links 316 so as to provide layer 3 network reachability and peering for service traffic between any of customer networks 308 and service peering center 301. [0056] [0056] Likewise, a single customer network, e.g., customer network 308A, need only have configured a single cloud access link (here, access link 316A) to cloud exchange point 303 within data center 300 in order for service peering center 301 to provide services to that customer network and the other customer networks 308 also coupled to cloud exchange point 303. That is, the operator of service peering center 301 does not need to provision and configure separate service links connecting service peering center 301 to multiple customer networks in order to establish services with them. [0057] [0057] In some cases, service peering center 301 may be coupled to a PE router (not shown) that is coupled to access link 322. The PE router may execute an exterior gateway routing protocol, e.g., eBGP, to exchange routes with PE router 304A of the cloud exchange point. [0058] [0058] In the illustrated example, an Internet Protocol/Multiprotocol Label Switching (IP/MPLS) fabric 318 interconnects PEs 302 and PE 304A. IP/MPLS fabric 318 includes one or more switching and routing devices, including PEs 302, 304A, that provide IP/MPLS switching and IP packet routing to form an IP backbone. In some examples, IP/MPLS fabric 318 may implement one or more different tunneling protocols (i.e., in addition to MPLS) to route traffic between PE routers and/or to associate traffic with different IP-VPNs. 
In accordance with the techniques described herein, IP/MPLS fabric 318 implements IP virtual private networks (IP-VPNs) to connect any of customer networks 308 with service peering center 301 to provide data center-based layer 3 transport. Whereas service provider-based IP backbone networks require bandwidth-limited wide-area network (WAN) connections to transport service traffic from layer 3 service providers to customers, cloud exchange point 303, as described herein, transports service traffic and interconnects service peering center 301 with customers 308 within the high-bandwidth, local environment of data center 300 provided by a data center-based IP/MPLS fabric 318. In some examples, IP/MPLS fabric 318 implements IP-VPNs using techniques described in Rosen & Rekhter, “BGP/MPLS IP Virtual Private Networks (VPNs)”, Request for Comments 4364, February 2006, Internet Engineering Task Force (IETF) Network Working Group, the entire contents of which are incorporated herein by reference. In some exemplary configurations, a customer network 308 and service peering center 301 may connect via respective links to the same PE router of IP/MPLS fabric 318. [0059] [0059] Access links 316 and access link 322 may include attachment circuits that associate traffic, exchanged with the connected customer network 308 or service peering center 301, with virtual routing and forwarding instances (VRFs) configured on PEs 302, 304A and corresponding to IP-VPNs operating over IP/MPLS fabric 318. For example, PE 302A may exchange IP packets with PE 309A on a bidirectional label-switched path (LSP) operating over access link 316A, the LSP being an attachment circuit for a VRF configured on PE 302A. 
As another example, PE 304A may exchange IP packets with a PE device or network switch of service peering center 301 on a bidirectional label-switched path (LSP) or VLAN operating over access link 322, the LSP or VLAN being an attachment circuit for a VRF configured on PE 304A. Each VRF may include or represent a different routing and forwarding table with distinct routes. [0060] [0060] PE routers 302, 304 of IP/MPLS fabric 318 may be configured in respective hub-and-spoke arrangements for services, with PE 304A implementing a hub and PEs 302 configured as spokes (for multiple hub-and-spoke instances/arrangements). A hub-and-spoke arrangement ensures that service traffic is enabled to flow between the hub PE and any of the spoke PEs, but not directly between different spoke PEs. Hub-and-spoke VPNs can in this way enforce complete separation among customer networks 308. As described further below, in a hub-and-spoke arrangement for the data center-based IP/MPLS fabric 318, for customer-bound service traffic (i.e., from service peering center 301 to a customer network 308), PEs 302 advertise routes, received from PEs 309, 311, to PE 304A. For service traffic bound for the service peering center (i.e., from a customer network 308 to service peering center 301), PE 304A advertises routes for service peering center 301 to PEs 302, which advertise the routes to PEs 309, CE 311. As used herein, a hub VRF exports routes having an “up” route target (RT), while a spoke VRF imports routes having the “up” route target. Conversely, a spoke VRF exports routes having a “down” route target, while a hub VRF imports routes having the “down” route target. In some examples, each VRF instance has a unique route distinguisher (RD). 
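The hub-and-spoke route-target scheme just described can be illustrated with a small Python model (this is not router configuration; the VRF names, route targets, and prefixes are invented stand-ins for the elements of Figures 2A-2B):

```python
class Vrf:
    """Toy VRF: exports its local routes with one route target and imports
    only advertisements carrying its configured import route target."""

    def __init__(self, name, export_rt, import_rt):
        self.name, self.export_rt, self.import_rt = name, export_rt, import_rt
        self.routes = set()     # locally originated prefixes
        self.imported = set()   # prefixes learned from advertisements

    def advertise(self):
        return {(prefix, self.export_rt) for prefix in self.routes}

    def learn(self, advertisements):
        self.imported |= {p for p, rt in advertisements if rt == self.import_rt}

# Hub exports "up" / imports "down"; spokes do the opposite.
hub = Vrf("hub-304A", export_rt="up", import_rt="down")
spoke_a = Vrf("spoke-302A", export_rt="down", import_rt="up")
spoke_b = Vrf("spoke-302B", export_rt="down", import_rt="up")

hub.routes.add("peering-center/32")
spoke_a.routes.add("customer-308A/24")
spoke_b.routes.add("customer-308B/24")

ads = hub.advertise() | spoke_a.advertise() | spoke_b.advertise()
for vrf in (hub, spoke_a, spoke_b):
    vrf.learn(ads)
```

After learning, each spoke holds only the hub's route and the hub holds both customer routes, so spokes can reach the service peering center but never each other, which is the separation property the arrangement provides.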
[0061] [0061] For some customers of cloud exchange point 303, the provider of cloud exchange point 303 may configure a full-mesh arrangement whereby a set of PEs 302, 304A each couple to a different customer site network for the customer. In such cases, IP/MPLS fabric 318 implements a layer 3 VPN (L3VPN) for site-to-site or redundancy traffic (also known as east-west or horizontal traffic). The L3VPN may effect a closed user group whereby each customer site network can send traffic to the others but cannot send or receive traffic outside of the L3VPN. [0062] [0062] In some examples, the PE routers may couple to one another according to a peer model without use of overlay networks. That is, PEs 309, CE 311, and a network device of service peering center 301 may not peer directly with one another to exchange routes, but rather indirectly exchange routes via the fabric. [0063] [0063] Each virtual circuit 324 can be implemented using a different hub-and-spoke network configured on IP/MPLS fabric 318 and having PE routers 302, 304 that exchange routes using a full or partial mesh of border gateway protocol peering sessions, in this example a full mesh of Multiprotocol Interior Border Gateway Protocol (MP-iBGP) peering sessions. MP-iBGP, or simply MP-BGP, is an example of a protocol by which routers exchange labeled routes to implement MPLS-based VPNs. However, PEs 302, 304 may exchange routes to implement IP-VPNs using other techniques and/or protocols. [0064] [0064] In the example of virtual circuit 324A, PE 304A may associate a route for reaching service peering center 301 with a hub-and-spoke network, which may have an associated VRF, that includes spoke PE router 302A. PE 304A then exports the route to PE router 302A; PE router 304A may export the route specifying PE router 304A as the next-hop router, along with a label that identifies the hub-and-spoke network. 
PE router 302A sends the route to PE router 309B via a routing protocol connection with PE 309B. PE router 302A may send the route after adding an autonomous system number of cloud exchange point 303 (for example, to a BGP autonomous system path (AS_PATH) attribute) and specifying PE router 302A as the next-hop router. Cloud exchange point 303 is thus an autonomous system “hop” on the path of autonomous systems from customers 308 to cloud service providers 320 (and vice versa), even though cloud exchange point 303 may be based within a data center. PE router 309B installs the route in a routing database, such as a BGP routing information base (RIB), to provide layer 3 reachability to service peering center 301. [0065] [0065] PE routers 309B, 302A, and 304A may perform a similar operation in the reverse direction to forward routes originated by customer network 308B to PE 304A and thereby provide connectivity from service peering center 301 to customer network 308B. In the example of virtual circuit 324A, PE routers 309A, 304A, and 302A exchange routes for customer network 308A and service peering center 301 in a manner similar to that described above for establishing virtual circuit 324B. As a result, cloud exchange point 303 within data center 300 may internalize the peering connections that would otherwise be established between a network device of service peering center 301 and each of PEs 309A, 309B, in order to perform aggregation of the services provided, via service peering center 301, to the multiple customer networks 308 through a single access link 322 to cloud exchange point 303. Absent the techniques described herein, interconnecting customer networks 308 and service peering center 301 for services would require peering connections between each of PEs 309, CE 311, and a network device of service peering center 301. 
With the techniques described herein, cloud exchange point 303 can fully interconnect customer networks 308 and service peering center 301 with a single access connection per local edge device (i.e., for each of PEs 309, CE 311, and the network device of service peering center 301) by internalizing the layer 3 peering and providing data center-based “transport” between the access interfaces. [0066] [0066] In instances in which IP/MPLS fabric 318 implements BGP/MPLS IP-VPNs or other IP-VPNs that use route targets to control route distribution within the IP backbone, PE 304A may be configured to import routes from PEs 302 and to export routes received from service peering center 301, using different asymmetric route targets. Likewise, PEs 302 may be configured to import routes from PE 304A and to export routes received from PEs 309, CE 311, using the asymmetric route targets. Thus, PEs 302, 304A may be configured to implement advanced L3VPNs that each include a backbone L3VPN of IP/MPLS fabric 318 together with extranets of any of customer networks 308 and service peering center 301 attached to the backbone L3VPN. Each advanced L3VPN constitutes a service delivery network from service peering center 301 to one or more customer networks 308, and vice versa. In this way, cloud exchange point 303 enables service peering center 301 to exchange service traffic with any customer network 308 while internalizing the layer 3 routing protocol peering connections that would otherwise be established in pairs between customer networks 308 and the network of service peering center 301 for any given service connection. In other words, cloud exchange point 303 allows each of customer networks 308 and the network of service peering center 301 to establish a single (or more, for redundancy or other reasons) layer 3 routing protocol peering connection to the data center-based layer 3 interconnect. 
By filtering routes from the network of service peering center 301 to customer networks 308, and vice versa, PEs 302, 304A thereby control the establishment of virtual circuits 324 and the flow of associated service traffic between customer networks 308 and service peering center 301 within data center 300. Routes distributed in the MP-iBGP mesh 318 may be VPN-IPv4 routes and may be associated with route distinguishers to distinguish routes from different sites having overlapping address spaces. [0067] [0067] Additional details of an exemplary interconnection platform and programmable network platform for configuring a cloud exchange point are described in United States Patent Application No. 15/001,766. [0068] [0068] Service peering center 301 exposes service exchange endpoints 306A-306C, reachable from cloud exchange point 303 via access link 322 with PE 304A. Service exchange endpoints 306 are also reachable from any customer network 308 that is coupled to cloud exchange point 303 and has a virtual circuit 324 for interconnection with service peering center 301. Service exchange endpoints 306 may represent exemplary instances of service exchange endpoints 106. Service peering center 301 stores configuration data in the form of a service map 320 that maps service exchange endpoints 306 to respective service endpoints 314 for accessing applications 310 through service gateways 312. For example, service map 320 may map service exchange endpoint 306A to service endpoint 314A, service exchange endpoint 306B to service endpoint 314B, and service exchange endpoint 306C to service endpoint 314C. Service map 320 may represent an associative data structure, such as a table, map, or dictionary. [0069] [0069] Service peering center 301 receives service requests from cloud exchange point 303 via access link 322 and determines the corresponding destination service endpoints 314 for the service requests using service map 320. 
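The associative structure of service map 320, as described in paragraph [0068], can be sketched directly as a Python dictionary; the short labels stand in for the (address, port) pairs an implementation would actually store:

```python
# Service map 320 as an associative data structure: service exchange
# endpoint -> service endpoint, per the example mapping in paragraph [0068].
SERVICE_MAP_320 = {
    "306A": "314A",
    "306B": "314B",
    "306C": "314C",
}

def destination_service_endpoint(exchange_endpoint: str) -> str:
    """Determine the destination service endpoint for a service request
    received at the given service exchange endpoint (paragraph [0069])."""
    return SERVICE_MAP_320[exchange_endpoint]
```

A lookup failure (a request at an unregistered exchange endpoint) would raise `KeyError` here; an implementation would instead discard or reject such a request.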
In the example of Figure 2B, application 310A originates a service request 325A destined for service exchange endpoint 306B. Customer network 308A issues service request 325A to cloud exchange point 303 via access link 316A on virtual circuit 324A. Cloud exchange point 303 forwards service request 325A using virtual circuit 324A to service peering center 301, which receives service request 325A over access link 322. [0070] [0070] Service peering center 301 receives service request 325A at service exchange endpoint 306B. [0071] [0071] Service peering center 301 issues service request 325A' over access link 322. Cloud exchange point 303 determines that service request 325A' is destined for service endpoint 314B and is to be sent using virtual circuit 324B. Service peering center 301 may issue service request 325A' on a VLAN or other attachment circuit for an IP-VPN or other virtual network with customer network 308B. Cloud exchange point 303 may route service request 325A' using virtual circuit 324B based in part on the attachment circuit on which PE 304A receives service request 325A'. Customer network 308B receives service request 325A' from cloud exchange point 303 via access link 316B. [0072] [0072] Service gateway 312B receives service request 325A' at service endpoint 314B. Service gateway 312B sends at least the service data of service request 325A' to application 310B for processing. [0073] [0073] Service peering center 301 may host a transport-layer session (e.g., TCP) between service peering center 301 and service application instance 310A and a transport-layer session between service peering center 301 and service application instance 310B. Service peering center 301 may also host connectionless communications (e.g., UDP) between service peering center 301 and service application instance 310A and connectionless communications between service peering center 301 and service application instance 310B. 
In this way, service peering center 301 creates a service-to-service path between a service instance for application 310A and a service instance for application 310B, despite customer networks 308A, 308B having no network connectivity with one another other than through cloud exchange point 303. The service instance for application 310A and the service instance for application 310B may exchange service traffic via the service-to-service path that includes service peering center 301. [0074] [0074] Service peering center 301 may, in some cases, apply policies to control direct bridging (layer 2 forwarding) of service requests between service endpoints 314 for the corresponding applications 310. In such cases, the service peering center may eschew service-layer mapping and routing where service gateways 312 have network reachability to one another through service peering center 301. For example, a service request originated by application 310A and specifying service endpoint 314C may be received at service peering center 301 operating as a network bridge to customer network 308C. Service peering center 301 applies policies 332, as described below, to determine whether the service request is permitted. If so, service peering center 301 forwards the service request to service endpoint 314C. If not, service peering center 301 discards the service request. [0075] [0075] In some examples, as part of routing service requests, the service peering center may orchestrate sessions to provision service chains for services. [0076] [0076] Policies 332 enable segmentation of services among applications 310 executing on customer networks 308. That is, service peering center 301 determines, based on policies 332, the sets (e.g., pairs) of applications 310 for which service peering center 301 will provide a service-to-service path by delivering service requests and service responses to one another. 
In this way, policies 332 prevent service peering center 301 from providing, to service gateways 312, visibility into service traffic other than the service traffic directed to each such service gateway. In addition, each service gateway 312 can make service requests only to those other service gateways 312 permitted by policies 332. An administrator or operator of service peering center 301 may also configure policies 332. Policies 332 may further specify a frequency, a quantity, permissible dates and times, or other properties of one or more service requests between a pair of service gateways 312. In this way, service peering center 301, by enforcing policies 332, operates as a mediator among applications 310 to secure and control service flows. In some examples, customer portal 330 provides self-service automation for customers to configure policies 332. In some examples, a configuration API is exposed by service peering center 301 to provide self-service automation for customers to configure policies 332. [0077] [0077] For example, customer portal 330 represents an application that may provide a user interface for customers to configure operations of service peering center 301, in particular to configure service map 320 and policies 332. Customer portal 330 may provide a web interface or other graphical user interface, accessible via a web site, for configuring policies 332. One or more computing devices, such as real servers, execute customer portal 330. An operator of service peering center 301 may also use customer portal 330 to configure service map 320 and policies 332. [0078] [0078] Policies 332 may include policies for security, mediation, routing, service transformation, load balancing, and service throttling, for example. Policies 332 may be customer-specific (i.e., defined for a particular customer) or global. Security policies include policies for authentication, authorization, validation, and encryption, for example. 
For example, policies 332 may require service peering center 301 to authorize service requests using previously obtained credentials or login tokens, using a security protocol such as OAuth 2.0, X.509, Kerberos, or a username and password. Security policies may also determine whether a user, a service, or a service gateway 312 of one of customer networks 308 is authorized to issue service requests to a service gateway 312 (or service endpoint 314) of another of customer networks 308. Application of policies 332 for load balancing may include dynamic mapping of service names to service destinations that are instances of a service gateway. [0079] [0079] Offloading policy enforcement from service gateways 312, or more broadly from customer networks 308, can provide the technical advantage of improving the value and/or scalability of services on the network overall. For example, rather than each service gateway 312 applying security services, such as distributed denial-of-service (DDoS) protection, at the service gateway, service peering center 301 can apply DDoS protection on behalf of the various service gateways. Other offloaded services may include throttling, validation, other security services, mediation, load balancing, and routing. This technical advantage may be particularly significant for small-scale customers that are unable to invest significant resources in network infrastructure. [0080] [0080] Routing policies of policies 332 cause service peering center 301 to direct matching service requests to particular target service endpoints. Although illustrated as a separate data structure, service map 320 may, in some cases, be implemented using policies 332. Routing policies may match service requests based on application data within the service request, the originator of the service request, and the destination service exchange endpoint 306, for example. 
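Routing policies of the kind just described, matching on originator, destination service exchange endpoint, and application data within the request, might be sketched as follows in Python. The policy fields and values are illustrative assumptions, not a format defined by this disclosure.

```python
def route(request, policies):
    """Return the target service endpoint of the first policy whose
    criteria all match the request, or None to indicate the request
    should be discarded rather than routed."""
    for policy in policies:
        if (policy["originator"] == request["originator"]
                and policy["exchange_endpoint"] == request["exchange_endpoint"]
                and request["path"].startswith(policy["path_prefix"])):
            return policy["target"]
    return None

# Hypothetical policies 332 entry: requests from gateway 312A arriving at
# exchange endpoint 306B for the /orders API are routed to endpoint 314B.
policies_332 = [
    {"originator": "gateway-312A", "exchange_endpoint": "306B",
     "path_prefix": "/orders", "target": "314B"},
]

matched = route({"originator": "gateway-312A",
                 "exchange_endpoint": "306B",
                 "path": "/orders/v1"}, policies_332)
```

A request from an unauthorized originator, or for an unregistered endpoint, matches no policy and is dropped, which is the segmentation behavior attributed to policies 332 above.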
Service regulation policies of policies 332 can throttle service requests to a customer based on the service, the originator of the service requests, or other criteria. Load balancing can be applied by service gateways 312 to service requests received at service endpoints 314.

[0081] While described with respect to service peering exchange 301 of Figures 2A-2B, policies 332 may be enforced by other service peering exchanges described in this disclosure.

[0082] Service map 320 maps service exchange endpoints 306 to respective service endpoints 314 for access from customer networks 308 to remote applications 310 via service gateways 312. As noted above, service map 320 can be implemented using routing policies of policies 332.

[0083] In the example of Figure 2B, service peering exchange 301 can apply policies 332 to authenticate and/or authorize service gateway 312A or application 310A to send service requests to service endpoint 314B through service exchange endpoint 306B. The service peering exchange 301 may return an authorization token to the authorized entity. Service request 325A may include the authorization token or other credential. Service peering exchange 301 may apply policies 332 and service map 320 to service request 325A received at service exchange endpoint 306B to authorize, throttle, and route a representation of service request 325A to service endpoint 314B as service request 325A'.

[0084] Figure 3 is a block diagram illustrating an exemplary service exchange system, according to the techniques of this disclosure. The service exchange system 400 includes a service peering exchange platform 401 in communication with an exchange 410, as well as multiple service gateways 440A-440N in communication with the exchange 410.
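A service regulation (throttling) policy of the kind described above can be sketched as a sliding-window limiter keyed by the request originator. This sketch is illustrative only and not part of the disclosure; the class name, limits, and requester identifiers are all hypothetical.

```python
# Hypothetical sketch: limit the number of service requests a requester
# may issue within a time window, as a service regulation policy might.

class Throttle:
    def __init__(self, max_requests, window_s):
        self.max_requests = max_requests
        self.window_s = window_s
        self._events = {}   # requester -> list of request timestamps

    def allow(self, requester, now):
        """Return True if the requester is under its per-window limit."""
        events = [t for t in self._events.get(requester, [])
                  if now - t < self.window_s]
        if len(events) >= self.max_requests:
            self._events[requester] = events
            return False
        events.append(now)
        self._events[requester] = events
        return True

t = Throttle(max_requests=2, window_s=60)
print(t.allow("gateway-312A", now=0))    # True
print(t.allow("gateway-312A", now=10))   # True
print(t.allow("gateway-312A", now=20))   # False (limit reached within window)
```

Requests denied by the throttle would be dropped or deferred by the exchange rather than forwarded to the target service endpoint.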
The exchange 410 may represent an Internet exchange, an Ethernet exchange, or a cloud exchange, such as cloud exchange point 303, and may be managed by a data center provider for a data center in which customer networks for the service gateways 440 are co-located to exchange network traffic with other customer networks.

[0085] The service peering exchange platform 401 can provide a service peering exchange as described in Figures 1, 2A, 2B. The service peering exchange platform 401 may represent a real or virtual server, or a group of real or virtual servers and/or network devices communicatively coupled together using a network. The service peering exchange platform 401 can be hosted on a public, private, or hybrid cloud. The service peering exchange platform 401 may include one or more communication units 402, one or more input devices 404, and one or more output devices 406. The service peering exchange platform 401 includes one or more processors 412 and one or more storage devices 420. One or more of the devices, modules, storage areas, or other components of the service peering exchange platform 401 may be interconnected (physically, communicatively, and/or operationally) to allow communications between components. In some examples, such connectivity may be provided through a system bus, a network connection, an interprocess communication data structure, or any other method for communicating data. The service peering exchange application 422 can be executed in a distributed manner across multiple servers, of which the service peering exchange platform 401 is an example. The servers running the service peering exchange application 422 may include one or more bare-metal servers, virtual machines, containers, or other execution environments.

[0086] One or more input devices 404 of the service peering exchange platform 401 can generate, receive, or process input.
Such input may include input from a keyboard, pointing device, voice-responsive system, video camera, button, sensor, mobile device, control pad, microphone, presence-sensitive screen, network, or any other type of device for detecting input from a human or machine.

[0087] One or more output devices 406 of the service peering exchange platform 401 can generate, transmit, or process output. Examples of output are tactile, audio, visual, and/or video output. Output devices 406 may include a display, a sound card, a video graphics adapter card, a speaker, a presence-sensitive screen, one or more USB interfaces, video and/or audio output interfaces, or any other type of device capable of generating tactile, audio, video, or other output. Output devices 406 may include a display device, which may function as an output device using technologies including liquid crystal displays (LCD), quantum dot displays, dot matrix displays, light-emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, electronic ink, or monochrome or color displays.

[0088] One or more communication units 402 of the service peering exchange platform 401 can communicate with devices external to the service peering exchange platform 401 by transmitting and/or receiving data, and may operate, in some respects, as both an input device and an output device. In some examples, communication units 402 can communicate with other devices over a network, including with service gateways 440 through exchange 410. In other examples, communication units 402 can send and/or receive radio signals on a radio network such as a cellular radio network. In other examples, communication units 402 of the service peering exchange platform 401 may transmit and/or receive satellite signals on a satellite network, such as a Global Positioning System (GPS) network.
Examples of communication units 402 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of communication units 402 may include Bluetooth®, GPS, 3G, 4G, and Wi-Fi® radios found in mobile devices, as well as Universal Serial Bus (USB) controllers and the like.

[0089] One or more processors 412 of the service peering exchange platform 401 may implement functionality and/or execute instructions. Examples of processors 412 include microprocessors, application processors, display controllers, auxiliary processors, one or more sensor hubs, and any other hardware configured to function as a processor, processing unit, processing device, or processing circuitry. The service peering exchange platform 401 may use one or more processors 412 to perform operations in accordance with one or more aspects of the present disclosure using software, hardware, firmware, or a mixture of hardware, software, and firmware stored by and/or executing on the service peering exchange platform 401.

[0090] One or more storage devices 420 may store information for processing during operation of the service peering exchange platform 401. In some examples, one or more storage devices 420 are temporary memories, meaning that a primary purpose of the one or more storage devices is not long-term storage. Storage devices 420 can be configured for short-term storage of information as volatile memory and therefore do not retain stored content if powered off. Examples of volatile memory include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), and other forms of volatile memory known in the art. Storage devices 420, in some examples, also include one or more computer-readable storage media. Storage devices 420 can be configured to store larger amounts of information than volatile memory.
Storage devices 420 may further be configured for long-term storage of information as non-volatile memory and retain information after power on/off cycles. Examples of non-volatile memory include magnetic hard disks, optical disks, floppy disks, Flash memories, or forms of electrically programmable memory (EPROM) or electrically erasable and programmable memory (EEPROM). Storage devices 420 may store program instructions and/or data associated with one or more of the modules described in accordance with one or more aspects of this disclosure.

[0091] One or more processors 412 and one or more storage devices 420 may provide an operating environment or platform for one or more modules, which may be implemented as software but may in some instances include any combination of hardware, firmware, and software. One or more processors 412 can execute instructions and one or more storage devices 420 can store instructions and/or data of one or more modules. The combination of processors 412 and storage devices 420 can retrieve, store, and/or execute the instructions and/or data of one or more applications, modules, or software. Processors 412 and/or storage devices 420 may also be operatively coupled to one or more other software and/or hardware components, including, but not limited to, one or more of the components illustrated in Figure 3.

[0092] One or more modules or applications illustrated in Figure 3 as being included within storage devices 420 (or modules otherwise described herein) may perform the described operations using software, hardware, firmware, or a mixture of hardware, software, and firmware residing on and/or executing on the service peering exchange platform 401. The service peering exchange platform 401 can execute each of the module(s) with multiple processors or multiple devices. The service peering exchange platform 401 can execute one or more of these modules as a virtual machine or container executing on underlying hardware.
One or more of these modules can execute as one or more services of an operating system 431 or computing platform. One or more of these modules can execute as one or more executable programs in an application layer of a computing platform provided by operating system 431.

[0093] User interface module 435 can manage user interactions with one or more user interface devices, which may include one or more input devices 404 and one or more output devices 406. In some examples, the service peering exchange platform 401 may include a presence-sensitive display that may serve as a user interface device and that may be considered both an input device 404 and an output device 406. In some examples, user interface module 435 may act as an intermediary between various components of the service peering exchange platform 401 to make determinations based on user input detected by one or more user interface devices and/or one or more input devices 404 and to generate output at a user interface device or one or more output devices 406.

[0094] User interface module 435 may receive instructions from an application, service, platform, or other module of the service peering exchange platform 401 to cause a user interface device (e.g., a presence-sensitive display) to output a user interface. User interface module 435 is illustrated as a module of service peering exchange application 422, but user interface module 435 may often be, or form part of, a component of an operating system controlling operation of the service peering exchange platform 401, and user interface module 435 may alternatively or also be a standalone application, service, or module executing on the service peering exchange platform 401.
User interface module 435 may manage input received by the service peering exchange platform 401 as a user views and interacts with a presented user interface, and may update the user interface in response to receiving additional instructions from the application, service, platform, or other module of the service peering exchange platform 401 that is processing the user input. As further described below, user interface module 435 may output a user interface and receive input data from input devices 404 accessible by a customer or administrator of the service peering exchange platform 401 to specify and/or manipulate policies 432.

[0095] Storage devices 420 may include an operating system 431 and a service peering exchange application 422 for performing operations related to providing a service peering exchange for exchanging service requests between service gateways 440. The service peering exchange application 422 may interact and/or operate in conjunction with one or more modules of the service peering exchange platform 401. The service peering exchange application 422 may listen for network packets at service exchange endpoints of the service peering exchange platform 401. The operating system 431 can execute a network stack and deliver network packets destined for the service exchange endpoints to the service peering exchange application 422.

[0096] A user can invoke user interface 436 or application programming interface 439 of the service peering exchange application 422 to configure policies 432. Policies 432 may represent example instances of policies 332. Communication units 402 can receive service endpoint data that describes one or more service endpoints of one or more service gateways 440.
The one or more processors 412 executing the service peering exchange application 422 process the service endpoint data, request service exchange endpoints from the operating system 431, and map the service exchange endpoints to the corresponding service endpoints indicated by the service endpoint data. Processors 412 generate service map 450 with mappings from service exchange endpoints to service endpoints, and vice versa. With these example operations, the service peering exchange application 422 enables the service peering exchange platform 401 to operate as an application services framework that performs routing of services between service endpoints. The application services framework may extend across one or more real and/or virtual computing devices and/or network devices that make up the service peering exchange platform 401.

[0097] The service peering exchange platform 401 receives a service request 425A from a customer network associated with service gateway 440A, through communication unit 402 and the exchange 410. Service request 425A is destined for a service exchange endpoint of the service peering exchange platform 401. Service request 425A may represent an example instance of any of the service requests described in this disclosure. Operating system 431 hands service request 425A to service peering exchange application 422, which listens at the service exchange endpoint, for processing. Service request 425A may be made up of one or more packets.

[0098] In accordance with one or more aspects of the present disclosure, one or more processors 412 executing the service peering exchange application 422 process service request 425A by applying policies 432 to output a representation of service request 425A to a service endpoint of service gateway 440B. Processors 412 apply service map 450 to map the destination service exchange endpoint of service request 425A to a service endpoint of service gateway 440B.
Service map 450 may match the destination network address and destination port of the one or more packets of service request 425A and specify a destination service endpoint of service gateway 440B. In response, processors 412 generate service request 425A' having a destination network address and destination port that are the specified destination service endpoint of service gateway 440B. Processors 412 can generate service request 425A' to have a source network address and source port that are a service exchange endpoint of the service peering exchange platform 401. In this way, the service peering exchange platform 401 mediates between service gateway 440A and service gateway 440B. Processors 412 send, through communication unit 402, service request 425A' for delivery through the exchange 410.

[0099] Figure 4 is an example service map, according to the techniques of this disclosure. Service map 450 is an associative data structure having multiple entries 452A-452D that each map a service exchange endpoint of a service peering exchange to a service endpoint of an application, such as a service endpoint exposed by a service gateway, and vice versa. For example, entry 452A maps service exchange endpoint 106A to service endpoint 114A. Service map 450 may store each service endpoint and service exchange endpoint as a combination of a network address and a transport layer port. Service map 450 may include a hash table such that entries 452 are hash buckets with hash values corresponding to values of a hash function applied to the service endpoints or service exchange endpoints, with the hash value of a service exchange endpoint being mapped to the service endpoint and the hash value of a service endpoint being mapped to a service exchange endpoint. Example hash functions include SHA-1 and MD5.
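The associative service map and the address/port rewriting described above can be sketched briefly. The sketch is not part of the disclosure: the addresses and ports are invented for illustration, and a Python dictionary stands in for the hash-table structure (a dictionary already hashes its keys internally, here the (address, port) tuples).

```python
# Hypothetical sketch of service map 450: an associative structure keyed by
# (network address, transport port) of a service exchange endpoint, mapping
# to the (address, port) of the corresponding service endpoint, and vice versa.

service_map = {
    ("203.0.113.10", 8080): ("10.0.1.5", 443),   # exchange endpoint -> service endpoint
    ("10.0.1.5", 443): ("203.0.113.10", 8080),   # reverse mapping
}

def rewrite_request(service_map, dst_addr, dst_port, exchange_src):
    """Map a request's destination exchange endpoint to its service endpoint
    and return the rewritten (source, destination) pair; the exchange's own
    endpoint becomes the new source, as described for request 425A'."""
    new_dst = service_map[(dst_addr, dst_port)]
    return exchange_src, new_dst

src, dst = rewrite_request(service_map, "203.0.113.10", 8080,
                           exchange_src=("203.0.113.10", 40000))
print(dst)  # ('10.0.1.5', 443)
```

Because the map stores both directions, responses arriving at the service endpoint's entry can be rewritten back toward the originating exchange endpoint in the same way.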
[0100] Figure 5 is a block diagram illustrating a conceptual view of a service exchange system having a metro-based cloud exchange that provides multiple cloud exchange points for communication with a service peering exchange, according to the techniques described herein. Each of the cloud-based service exchange points 528A-528D (described hereinafter as "cloud exchange points" and collectively referred to as "cloud exchange points 528") of cloud-based services exchange 500 ("cloud exchange 500") may represent a different data center geographically located within the same metropolitan area ("metro-based", e.g., New York City, New York; Silicon Valley, California; Seattle-Tacoma, Washington; Minneapolis-St. Paul, Minnesota; and so forth).

[0101] Each of the cloud exchange points 528 includes network infrastructure and an operating environment through which clients 508A-508D (collectively, "cloud clients 508") exchange service requests and service responses via service peering exchange 101. Each of the clients 508 may have one or more service gateways (not shown in Figure 5). Cloud clients 508 can exchange service requests and service responses directly through a physical and layer 3 peering connection to one of the cloud exchange points 528 or indirectly through one of network service providers 506A-506B (collectively, "NSPs 506"). NSPs 506 provide "cloud transit" by maintaining a physical presence within one or more of the cloud exchange points 528 and aggregating layer 3 access from one or more clients 508. NSPs 506 may peer, at layer 3, directly with one or more cloud exchange points 528 and thereby provide indirect layer 3 connectivity and peering to one or more clients 508, by which the clients 508 can obtain cloud services from the cloud exchange 500.

[0102] Each of the cloud exchange points 528, in the example of Figure 5, can be assigned a different autonomous system number (ASN).
For example, cloud exchange point 528A is assigned ASN 5, cloud exchange point 528B is assigned ASN 2, and so on. Each cloud exchange point 528 is thus a next hop in a path vector routing protocol (e.g., BGP) path from service peering exchange 101 to clients 508. As a result, each cloud exchange point 528 may, despite not being a transit network having one or more wide area network links and concomitant Internet access and transit policies, peer with multiple different autonomous systems via external BGP (eBGP) or another exterior gateway routing protocol to exchange, aggregate, and route service traffic from one or more cloud service providers 550 to customers. In other words, cloud exchange points 528 can internalize the eBGP peering relationships that cloud service providers 550 and customers 508 would otherwise maintain on a pairwise basis. Instead, a customer 508 may configure a single eBGP peering relationship with a cloud exchange point 528 and receive, through the cloud exchange, multiple cloud services from one or more cloud service providers 550. Although described herein primarily with respect to eBGP or another layer 3 routing protocol peering between the cloud exchange points and the customer, NSP, or cloud service provider networks, the cloud exchange points can learn routes from these networks in other ways, such as by static configuration, or via Routing Information Protocol (RIP), Open Shortest Path First (OSPF), Intermediate System-to-Intermediate System (IS-IS), or another route distribution protocol. Each of the cloud exchange points 528 may represent an example instance of cloud exchange point 303.

[0103] As examples of the above, customer 508D is illustrated as having contracted with a cloud exchange provider for cloud exchange 500 to directly access layer 3 cloud services through cloud exchange points 528C, 528D. In this way, customer 508D receives redundant layer 3 connectivity to cloud service provider 550A, for example.
Customer 508C, in contrast, is illustrated as having contracted with the cloud exchange provider for cloud exchange 500 to directly access layer 3 cloud services through cloud exchange point 528C and also as having contracted with NSP 506B to access layer 3 cloud services over a transit network of NSP 506B. Customer 508B is illustrated as having contracted with multiple NSPs 506.

[0104] As an example, client 508A issues a service request 525 to a service exchange endpoint exposed by the service peering exchange 101. NSP 506A carries service request 525 to cloud exchange point 528A, which sends service request 525 to service peering exchange 101 using a virtual circuit between NSP 506A and service peering exchange 101.

[0105] Service peering exchange 101 maps the service exchange endpoint that is the destination of service request 525 to a service endpoint in customer network 508D. The service peering exchange 101 generates a new service request 525' that includes service data from service request 525, and issues service request 525' to customer network 508D. Cloud exchange point 528D delivers service request 525' to customer network 508D using a virtual circuit between service peering exchange 101 and customer network 508D.

[0106] Figure 6 is a flowchart illustrating an example mode of operation 600 for a service peering exchange, in accordance with the techniques of this disclosure. Figure 6 is described for example purposes with respect to the service peering exchange 101 of Figure 1, but the operation may be performed by any service peering exchange described in this disclosure. Service peering exchange 101 receives service mapping data that maps service exchange endpoints 106 to service endpoints of customer networks 108 (602). Service peering exchange 101 may store the service mapping data as a service map or as one or more policies.
[0107] Service peering exchange 101 receives an inbound service request 124A that is issued by a device of customer network 108A and that is destined for a service exchange endpoint 106C of service peering exchange 101 (604). Service peering exchange 101 determines whether service request 124A is authorized for service exchange endpoint 106C (606). If service request 124A is not authorized (NO branch of 606), service peering exchange 101 discards service request 124A (608). For example, service peering exchange 101 may not respond or take any action with respect to service request 124A, or may respond with an error message. If service request 124A is authorized (YES branch of 606), service peering exchange 101 forwards the service request.

[0108] To forward the service request, service peering exchange 101 maps service exchange endpoint 106C to service endpoint 114C of service gateway 112C based on the service mapping data (610). Service peering exchange 101 generates a new outbound service request 125A' (or rewrites the header data of the inbound service request to form the new outbound service request 125A') that is destined for the service endpoint 114C mapped in step 610 (612). Service peering exchange 101 issues the outbound service request 125A' on a communication link 103C with customer network 108C (614).

[0109] Figure 7 is a block diagram illustrating an example distributed service exchange system, according to the techniques described herein. System 800 includes multiple geographically distributed data centers 810A-810B ("data centers 810") connected via a communication link through network service provider 825. Data centers 810 may be located within a single metropolitan area or located in different metropolitan areas. In this particular example architecture, each of the data centers 810 includes a corresponding cloud exchange of cloud exchanges 803A-803B ("cloud exchanges 803").
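The mode of operation 600 described above (receive, authorize, map, rewrite, forward or discard) can be sketched as a single handler function. This sketch is illustrative and not part of the disclosure; the dictionary-based request representation and the endpoint labels are hypothetical stand-ins for the packet headers and numbered elements in the figures.

```python
# Hypothetical sketch of operation 600: receive a service request, check
# authorization (606), map the exchange endpoint to a service endpoint (610),
# and emit a rewritten outbound request (612), or discard it (608).

def handle_request(request, service_map, authorized_pairs):
    dst = request["dst"]              # destination service exchange endpoint
    src = request["src"]              # originating customer network device
    if (src, dst) not in authorized_pairs:
        return None                   # 608: discard the unauthorized request
    outbound = dict(request)
    outbound["dst"] = service_map[dst]  # 610/612: map and rewrite destination
    return outbound                     # 614: to be issued toward the customer

service_map = {"106C": "114C"}
authorized = {("108A-device", "106C")}
req = {"src": "108A-device", "dst": "106C", "body": "GET /service"}
print(handle_request(req, service_map, authorized))
print(handle_request({"src": "other", "dst": "106C"}, service_map, authorized))  # None
```

Returning `None` models the "discard" branch; a real exchange might instead answer with an error message, as the text notes.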
However, other examples of a distributed service exchange system architecture can be implemented using different types of distributed architectures, such as over a WAN or SD-WAN. Each of the cloud exchanges 803 can be an example instance of cloud exchange point 303. Additional details for distributed cloud exchanges are found in United States Patent Application No. 15/475,957, filed March 31, 2017 and entitled "Inter-Metro Connectivity Network Connect," which is incorporated herein by reference in its entirety.

[0110] System 800 includes multiple distributed service peering exchanges 801A-801B ("service peering exchanges 801") co-located or otherwise having connectivity within respective data centers 810 and executing via a distributed service peering exchange platform. In this particular example architecture, service peering exchange 801A connects to cloud exchange 803A through access link 822A, and service peering exchange 801B connects to cloud exchange 803B through access link 822B. Access links 822 may represent example instances of access link 331. Although only two service peering exchanges 801 in two data centers 810 are illustrated in Figure 7, other examples of system 800 may include additional service peering exchanges 801 located in corresponding additional data centers 810.

[0111] Distributed service peering exchanges 801 operate as a distributed service peering exchange to provide service exchange services across multiple locations, to enable applications executing on customer networks in a local location to access services located at remote locations. Service peering exchanges 801 include service exchange endpoints 806 for sending and receiving service traffic with customer networks 108 over access links 822 and communication links 103 via cloud exchanges 803. Service exchange endpoints 806 may be example instances of service exchange endpoints 106.
[0112] Service peering exchange 801A includes service exchange endpoint 806C. Service peering exchange 801B includes service exchange endpoint 806A. Service peering exchange 801A receives service request 824A (an example instance of service request 124A) issued by application 110A at service exchange endpoint 806C. Service peering exchange 801A issues, in response, a corresponding outbound service request 824A'.

[0113] Distributed service peering exchanges 801 can monitor latencies between service peering exchanges 801 and expose the latencies via an API method. For example, service gateway 112A may request and receive, via the API method, a latency between service peering exchange 801A and service peering exchange 801B.

[0114] The techniques described herein can be implemented in hardware, software, firmware, or any combination thereof. Various features described as modules, units, or components can be implemented together in an integrated logic device or separately as discrete but interoperable logic devices or other hardware devices. In some cases, various features of electronic circuitry may be implemented as one or more integrated circuit devices, such as an integrated circuit chip or chip set.

[0115] If implemented in hardware, this disclosure may be directed to an apparatus such as a processor or an integrated circuit device, such as an integrated circuit chip or chip set. Alternatively, or additionally, if implemented in software or firmware, the techniques may be realized at least in part by a computer-readable data storage medium comprising instructions that, when executed, cause a processor to perform one or more of the methods described above. For example, the computer-readable data storage medium may store such instructions for execution by a processor.

[0116] A computer-readable medium may form part of a computer program product, which may include packaging materials.
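The latency-monitoring API method described in paragraph [0113] can be illustrated with a brief sketch. It is not part of the disclosure; the class, method names, and latency value are hypothetical, and a real implementation would measure round-trip times over the inter-metro links rather than take them as input.

```python
# Hypothetical sketch: each distributed service peering exchange records
# measured latencies to its peer exchanges and exposes them via a query method.

class LatencyMonitor:
    def __init__(self):
        self._latencies_ms = {}

    def record(self, peer, latency_ms):
        """Store the most recent measured latency to a peer exchange."""
        self._latencies_ms[peer] = latency_ms

    def get_latency(self, peer):
        """API method: return the last measured latency to a peer exchange,
        or None if no measurement exists."""
        return self._latencies_ms.get(peer)

monitor = LatencyMonitor()
monitor.record("exchange-801B", 42.5)
print(monitor.get_latency("exchange-801B"))  # 42.5
```

A service gateway could consult such a method before choosing which remote exchange to direct a latency-sensitive service request toward.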
A computer-readable medium may comprise a computer data storage medium such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), Flash memory, magnetic or optical data storage media, and the like. In some examples, an article of manufacture may comprise one or more computer-readable storage media.

[0117] In some examples, the computer-readable storage medium may comprise non-transitory media. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).

[0118] The code or instructions may be software and/or firmware executed by processing circuitry including one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent discrete or integrated logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Furthermore, in some aspects, the functionality described in this disclosure may be provided within software modules or hardware modules.
Claims (20)

[1] 1. A method comprising: receiving, by a service peering exchange executed by one or more computing devices and at a first service exchange endpoint of the service peering exchange, a first inbound service request from a first customer network, wherein the first inbound service request is destined for the first service exchange endpoint, and wherein the first inbound service request invokes an application programming interface of a first application; issuing, by the service peering exchange in response to receiving the first inbound service request, a first outbound service request destined for a service endpoint of a second customer network executing the first application, wherein the first outbound service request invokes the application programming interface of the first application; receiving, by the service peering exchange and at a second service exchange endpoint of the service peering exchange that is different from the first service exchange endpoint, a second inbound service request from the first customer network, wherein the second inbound service request is destined for the second service exchange endpoint, and wherein the second inbound service request invokes an application programming interface of a second application; and issuing, by the service peering exchange in response to receiving the second inbound service request, a second outbound service request destined for a service endpoint of a third customer network executing the second application, wherein the second outbound service request invokes the application programming interface of the second application.
[2] 2. The method of claim 1, further comprising: receiving, by the service peering exchange, service endpoint data describing the service endpoint of the second customer network executing the first application; and generating, by the service peering exchange, an association from the first service exchange endpoint to the service endpoint of the second customer network executing the first application, wherein issuing the first outbound service request comprises issuing the first outbound service request based at least on the association.

[3] 3. The method of claim 1, wherein the first customer network has no network connectivity to the second customer network.

[4] 4. The method of claim 1, wherein each of the first inbound service request and the first outbound service request comprises one of a Representational State Transfer (REST) communication using HyperText Transfer Protocol (HTTP), a JavaScript Object Notation (JSON) Remote Procedure Call (RPC), a Simple Object Access Protocol (SOAP) message, an Apache Thrift request, an eXtensible Markup Language (XML)-RPC, a Message Queuing Telemetry Transport (MQTT) message, a Rabbit Message Queue (RabbitMQ) message, and a Constrained Application Protocol (CoAP) message.

[5] 5. The method of claim 1, further comprising: outputting, by a customer portal for the service peering exchange, an indication of accessibility of the application programming interface at the first service exchange endpoint; and receiving, by the service peering exchange, service mapping data comprising an association from the first service exchange endpoint to the service endpoint of the second customer network executing the first application, wherein issuing the first outbound service request comprises issuing the first outbound service request based on the association.
[6] The method of claim 1, further comprising: receiving, by the service peering exchange, a discovery request that invokes a discovery application programming interface of the service peering exchange and requests a service endpoint for accessing the application programming interface of the first application; and issuing, by the service peering exchange, a discovery response responsive to the discovery request, the discovery response indicating that the first service exchange endpoint is a service endpoint for accessing the application programming interface of the first application. [7] The method of claim 1, wherein each of the first service exchange endpoint and the second service exchange endpoint comprises a combination of a network layer address and a transport layer port of the computing device. [8] The method of claim 1, wherein the first service endpoint comprises a service endpoint of a service portal of the second customer network for applications executed by the second customer network, and wherein the second service endpoint comprises a service endpoint of a service portal of the third customer network for applications executed by the third customer network. [9] The method of claim 1, wherein the first customer network, the second customer network, and the computing device each communicate over a different access link with one of a cloud exchange, an Internet exchange, and an Ethernet exchange.
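Claim 6 describes a discovery exchange: a requester invokes the peering exchange's discovery API naming an application API, and the response identifies the service exchange endpoint at which that API is reachable. A minimal sketch of that request/response shape follows; the directory contents, field names, and addresses are assumptions for illustration, not from the claims.

```python
# Illustrative sketch of the discovery exchange in claim 6 (field names and
# addresses are hypothetical): a directory maps an application API name to the
# service exchange endpoint that fronts it at the peering exchange.
def discover(directory: dict[str, tuple[str, int]], api_name: str) -> dict:
    """Handle a discovery request and build a discovery response."""
    endpoint = directory.get(api_name)
    if endpoint is None:
        # No exchange endpoint is advertised for this API.
        return {"api": api_name, "found": False}
    address, port = endpoint
    return {"api": api_name, "found": True, "address": address, "port": port}

# Directory published by the peering exchange (hypothetical values).
directory = {"app1-api": ("203.0.113.10", 443)}

response = discover(directory, "app1-api")
```

Note that the response names only the exchange endpoint (address and transport port, per claim 7), never the customer network's own service endpoint, which stays private behind the exchange.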
[10] The method of claim 1, wherein the first customer network, the second customer network, and the computing device each communicate over a different access link with a cloud exchange, the method further comprising: provisioning, by a programmable network platform, a first virtual circuit in the cloud exchange to create a first end-to-end path between the computing device and the first customer network; and provisioning, by the programmable network platform, a second virtual circuit in the cloud exchange to create a second end-to-end path between the computing device and the second customer network. [11] The method of claim 10, wherein the cloud exchange comprises a layer three (L3) autonomous system located within a data center, wherein each of the access links to the cloud exchange comprises an attachment circuit for an Internet Protocol Virtual Private Network configured in the cloud exchange, wherein the first end-to-end path includes the access link for communication between the first customer network and the cloud exchange and the access link for communication between the cloud exchange and the computing device, and wherein the second end-to-end path includes the access link for communication between the second customer network and the cloud exchange and the access link for communication between the cloud exchange and the computing device. [12]
A service exchange system comprising: one or more service peering exchanges configured to execute on a service peering exchange platform comprising one or more computing devices; and a service portal for an application configured to execute on a first customer network, the service portal configured to be executed by a computing device of the first customer network, wherein the one or more service peering exchanges are configured to receive, at a service exchange endpoint, an inbound service request from a second customer network, wherein the inbound service request is destined for the service exchange endpoint, and wherein the inbound service request invokes an application programming interface of the application configured to execute on the first customer network, wherein the one or more service peering exchanges are configured to, in response to receipt of the inbound service request, issue an outbound service request destined for a service endpoint of the service portal, wherein the outbound service request invokes the application programming interface of the application configured to execute on the first customer network, and wherein the service portal is configured to receive the outbound service request at the service endpoint and forward the outbound service request to the application. [13] A service exchange system comprising: one or more service peering exchanges configured to execute on a service peering exchange platform comprising one or more computing devices, wherein the one or more service peering exchanges are configured to receive, at a first service exchange endpoint, a first inbound service request from a first customer network, wherein the first inbound service request is destined for the first service exchange endpoint, and wherein the first inbound service request invokes an application programming interface of a first application, wherein the one or more service peering exchanges are configured to issue, in response to receipt of the first inbound service request, a first outbound service request destined for a service endpoint of a second customer network executing the first application, wherein the first outbound service request invokes the application programming interface of the first application, wherein the one or more service peering exchanges are configured to receive, at a second service exchange endpoint that is different from the first service exchange endpoint, a second inbound service request from the first customer network, wherein the second inbound service request is destined for the second service exchange endpoint, and wherein the second inbound service request invokes an application programming interface of a second application, and wherein the one or more service peering exchanges are configured to issue, in response to receipt of the second inbound service request, a second outbound service request destined for a service endpoint of a third customer network executing the second application, wherein the second outbound service request invokes the application programming interface of the second application. [14] The service exchange system of claim 13, wherein the one or more service peering exchanges are configured to receive service endpoint data describing the service endpoint for the first application, wherein the one or more service peering exchanges are configured to generate an association from the first service exchange endpoint to the service endpoint for the first application, and wherein, to issue the first outbound service request, the one or more service peering exchanges are configured to issue the first outbound service request based at least on the association.
[15] The service exchange system of claim 13, wherein the first customer network has no network connectivity to the second customer network. [16] The service exchange system of claim 13, wherein each of the first inbound service request and the first outbound service request comprises one of a Representational State Transfer (REST) communication using HyperText Transfer Protocol (HTTP), a JavaScript Object Notation (JSON)-Remote Procedure Call (RPC), a Simple Object Access Protocol (SOAP) message, an Apache Thrift request, an eXtensible Markup Language (XML)-RPC, a Message Queue Telemetry Transport (MQTT) message, a Rabbit Message Queue (RabbitMQ) message, and a Constrained Application Protocol (CoAP) request. [17] The service exchange system of claim 13, further comprising: a customer portal for the one or more service peering exchanges, the customer portal configured to display an indication of accessibility of an application programming interface at the first service exchange endpoint, wherein the one or more service peering exchanges are configured to receive service mapping data comprising an association from the first service exchange endpoint to the service endpoint for the first application, and wherein the one or more service peering exchanges are configured to issue the first outbound service request based on the association. [18] The service exchange system of claim 13, wherein the one or more service peering exchanges are configured to receive a discovery request that invokes a discovery application programming interface of the service peering exchange and requests a service endpoint for accessing the application programming interface of the first application, and wherein the one or more service peering exchanges are configured to issue a discovery response responsive to the discovery request, the discovery response indicating that the first service exchange endpoint is a service endpoint for accessing the application programming interface of the first application. [19] The service exchange system of claim 13, wherein each of the first service exchange endpoint and the second service exchange endpoint comprises a combination of a network layer address and a transport layer port. [20] The service exchange system of claim 13, wherein the first service endpoint comprises a service endpoint of a service portal of the second customer network for applications executed by the second customer network, and wherein the second service endpoint comprises a service endpoint of a service portal of the third customer network for applications executed by the third customer network.
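Claims 10 and 11 describe stitching two access links together with a virtual circuit in the cloud exchange, so that an end-to-end path runs customer network, to cloud exchange, to the peering-exchange computing device. The following sketch models that structure only; the class layout and all names are assumptions, not the patent's implementation.

```python
# Illustrative model of claims 10-11 (structure only; names are hypothetical):
# a virtual circuit in the cloud exchange joins two access links into one
# end-to-end path between a customer network and the peering-exchange device.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessLink:
    """One access link to the cloud exchange (an attachment circuit)."""
    a_side: str
    b_side: str

@dataclass(frozen=True)
class VirtualCircuit:
    customer_link: AccessLink  # customer network <-> cloud exchange
    device_link: AccessLink    # cloud exchange <-> peering-exchange device

    def end_to_end_path(self) -> list[str]:
        """The full path composed of both access links, joined at the exchange."""
        return [self.customer_link.a_side,
                self.customer_link.b_side,
                self.device_link.b_side]

# One virtual circuit per customer network, as in claim 10.
vc1 = VirtualCircuit(
    customer_link=AccessLink("customer-network-1", "cloud-exchange"),
    device_link=AccessLink("cloud-exchange", "peering-exchange-device"),
)
path = vc1.end_to_end_path()
```

A second `VirtualCircuit` with a different `customer_link` would model the second end-to-end path of claim 10; both paths share the same device-side access link into the cloud exchange.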
Patent family:
Publication number | Publication date
EP3639505A1 | 2020-04-22
AU2021254533A1 | 2021-11-18
AU2018285865B2 | 2021-07-22
WO2018232022A1 | 2018-12-20
US20210314411A1 | 2021-10-07
US20180359323A1 | 2018-12-13
CN110809875A | 2020-02-18
CA3066459A1 | 2018-12-20
US11044326B2 | 2021-06-22
AU2018285865A1 | 2019-12-19
Legal status:
2021-11-03 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
Application number | Filing date | Patent title
US201762518992P | 2017-06-13 |
US62/518,992 | 2017-06-13 |
PCT/US2018/037389 (WO2018232022A1) | 2018-06-13 | Service peering exchange